Empirical evaluation of the improved Rprop learning algorithms
Authors
Abstract
The Rprop algorithm proposed by Riedmiller and Braun is one of the best-performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark problems. The improved Rprop outperforms the other methods; only BFGS performs better in the later stages of learning on some of the test problems. To analyze the local search behavior, we compare the Rprop algorithms on general hyperparabolic error landscapes, where the new variants confirm their improved performance.
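To make the discussion concrete, the following is a minimal sketch of one of the improved variants described in this line of work, iRprop⁻, which adapts a per-weight step size from gradient sign agreement and discards the gradient after a sign flip. The function name, NumPy-based interface, and the specific hyperparameter defaults are illustrative assumptions, not the authors' reference implementation; the default values shown are common choices in the Rprop literature.

```python
import numpy as np

def irprop_minus_step(w, grad, prev_grad, step,
                      eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0):
    """One iRprop- update (illustrative sketch, not the reference code).

    w, grad, prev_grad, step are NumPy arrays of equal shape; the scalar
    hyperparameters use values commonly quoted for Rprop-type methods.
    Returns the updated weights, the gradient to remember, and the steps.
    """
    sign_change = grad * prev_grad
    # Consecutive gradients agree in sign: grow the step (capped above).
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    # Sign flip (a minimum was overshot): shrink the step (capped below).
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # iRprop-: after a sign flip, forget the gradient so no update is
    # taken in that coordinate this iteration.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Move each weight against the sign of its gradient by its own step.
    w = w - np.sign(grad) * step
    return w, grad.copy(), step
```

Note that the weight change depends only on the sign of the partial derivative, never on its magnitude, which is what makes Rprop-type methods robust to badly scaled error surfaces.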
Related papers
A New Learning Rates Adaptation Strategy for the Resilient Propagation Algorithm
In this paper we propose an Rprop modification that builds on a mathematical framework for the convergence analysis to equip Rprop with a learning rates adaptation strategy that ensures the search direction is a descent one. Our analysis is supported by experiments illustrating how the new learning rates adaptation strategy works in the test cases to ameliorate the convergence behaviour of the ...
Sign-based learning schemes for pattern classification
This paper introduces a new class of sign-based training algorithms for neural networks that combine the sign-based updates of the Rprop algorithm with the composite nonlinear Jacobi method. The theoretical foundations of the class are described and a heuristic Rprop-based Jacobi algorithm is empirically investigated through simulation experiments in benchmark pattern classification problems. N...
Adapting Resilient Propagation for Deep Learning
The Resilient Propagation (Rprop) algorithm has been very popular for backpropagation training of multilayer feed-forward neural networks in various applications. The standard Rprop however encounters difficulties in the context of deep neural networks as typically happens with gradient-based learning algorithms. In this paper, we propose a modification of the Rprop that combines standard Rprop...
New globally convergent training scheme based on the resilient propagation algorithm
In this paper, a new globally convergent modification of the Resilient Propagation-Rprop algorithm is presented. This new addition to the Rprop family of methods builds on a mathematical framework for the convergence analysis that ensures that the adaptive local learning rates of the Rprop’s schedule generate a descent search direction at each iteration. Simulation results in six problems of th...
Experimental Study on the Precision Requirements of RBF, Rprop and BPTT Training
Most neurocomputer architectures support only fixed-point arithmetic, which allows a higher degree of VLSI integration but limits the range and precision of all variables. Up to now, the effect of this limitation on neural network training algorithms has been studied only for standard models like SOM or BP. This paper presents the results of an experimental study in which the precision requirements ...
Journal: Neurocomputing
Volume: 50, Issue: -
Pages: -
Publication year: 2003